This report is an update to the analysis presented on 1/14/2022, exploring the relationship between wastewater and cases. Most steps are the same, with tweaks due to different data and a different outlier procedure. There are four components to this analysis:
1. Removing putative outliers
2. Binning analysis
3. Smoothing the signal
4. Statistical analysis
This report does not present final answers, but it offers some convincing heuristics.
Files Used:
./../../data/processed/MMSD_Interceptor_Cases_2_7_22.csv
./../../data/processed/LIMSWasteData_02-09-22.csv
The two data sets used in this analysis are the Madison case data sourced from the Wisconsin DHS and wastewater concentration data produced by the Wisconsin State Laboratory of Hygiene. This wastewater data has entries every couple of days from 15 September 2020 to 25 January 2022.
| Date | Site | Cases | SevenDayMACases | N1 | BCoV | N2 | PMMoV | GeoMeanN12 | FlowRate | temperature | equiv_sewage_amt |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2020-09-15 | Madison | 140 | 110.42857 | 63618 | NA | 32447 | 25753384 | 45433.614 | 44.77 | NA | 2 |
| 2020-09-19 | Madison | 61 | 91.85714 | 11442 | 0.2700 | 9003 | 40968307 | 10149.499 | 46.70 | NA | 2 |
| 2020-09-22 | Madison | 83 | 76.00000 | 39145 | 1.0900 | 8102 | 22508539 | 17808.784 | 41.49 | NA | 2 |
| 2020-09-23 | Madison | 82 | 76.42857 | 30509 | 0.2744 | 7419 | 21980553 | 15044.809 | 41.65 | NA | 2 |
| 2020-09-24 | Madison | 80 | 77.42857 | 16743 | 0.0731 | 4437 | 21097519 | 8619.089 | 41.47 | NA | 2 |
| 2020-09-25 | Madison | 59 | 77.42857 | 28553 | 0.7774 | 1210 | 26700211 | 5877.851 | 41.07 | NA | 2 |
## min(Date) max(Date)
## 1 2020-09-15 2022-01-25
The case data has a strong weekend effect, so for this section we look at a seven-day smoothing of cases. A simple display of the data shows the core components of this story: first, the wastewater data is noisy; second, there is a clear relationship between the two signals.
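The SevenDayMACases column arrives precomputed in the CSV; a trailing seven-day moving average of the kind it represents can be sketched with `zoo::rollmean` (the toy counts below are illustrative, not from the data set):

```r
library(zoo)

# Hypothetical sketch: a trailing seven-day moving average of daily counts.
# SevenDayMACases in the real data is precomputed upstream.
cases <- c(140, 61, 83, 82, 80, 59, 95, 120, 70)  # toy daily counts
seven_day_ma <- rollmean(cases, k = 7, fill = NA, align = "right")
round(seven_day_ma, 2)
```

The `align = "right"` choice makes each value an average of that day and the six preceding days, which is the usual convention for dampening a day-of-week effect without using future data.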
FirstImpressionDF <- FullDF%>%
NoNa(SelectedIndVar,"Cases")#Removing NA
FirstImpression <- FirstImpressionDF%>%
ggplot(aes(x=Date))+#Data depends on time
geom_point(aes(y=(Cases), color="Cases",info=Cases),size = 1)+
geom_line(aes(y=MinMaxFixing(!!sym(SelectedIndVar),Cases),
color=SelectedIndVar,
info=!!sym(SelectedIndVar)))+#compares SelectedIndVar to Cases
geom_line(aes(y=(SevenDayMACases),
color="Seven Day MA Cases",
info=Cases))+
labs(y="Reported cases")+
ColorRule
ggplotly(FirstImpression)%>%
add_lines(x=~Date, y=FirstImpressionDF[[SelectedIndVar]],
yaxis="y2", data=FirstImpressionDF, showlegend=FALSE, inherit=FALSE) %>%
layout(yaxis2 = SecondAxisFormat,
legend=list(title=list(text=''),x = 1.15, y = 0.9),
margin = list(l = 50, r = 75, b = 50, t = 50, pad = 4))
Figure 1.1: Wastewater concentration and daily Covid-19 case data for Madison. A seven day moving average of cases is used to reduce a day of the week effect.
#To remove weekend effects we look at the 7 day smoothing of cases.
Looking at the wastewater measurements, we observe some points many times larger than adjacent values, hinting that they are outliers. We used the 10 adjacent values on each side and marked points more than 2.5 standard deviations from the window mean as outliers.
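The IdentifyOutliers helper is defined outside this report. A minimal sketch of the rolling rule described above (21-point centered window, flag points more than 2.5 standard deviations from the window mean), under the assumption that the helper works roughly this way:

```r
library(zoo)

# Hypothetical sketch of the rolling-SD outlier rule; the real
# IdentifyOutliers helper (defined elsewhere) also takes method,
# align, n, Bin, and Action arguments.
FlagRollingOutliers <- function(x, bin = 21, threshold = 2.5) {
  roll_mean <- rollapply(x, width = bin, FUN = mean, na.rm = TRUE,
                         align = "center", partial = TRUE)
  roll_sd   <- rollapply(x, width = bin, FUN = sd, na.rm = TRUE,
                         align = "center", partial = TRUE)
  abs(x - roll_mean) > threshold * roll_sd   # TRUE marks a putative outlier
}

x <- c(rep(10, 10), 500, rep(10, 10))   # one obvious spike
FlagRollingOutliers(x)
```

With `partial = TRUE` the edge windows shrink instead of producing NAs, which mimics the boundary behavior a centered rolling statistic needs on a finite series.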
#Defaults passed to IdentifyOutliers:
#method="SD", align="center", n = 5, Bin = 21, Action = "Flag"
ErrorMarkedDF <- FullDF%>%
  mutate(FlaggedOutliers = IdentifyOutliers(!!sym(SelectedIndVar), Action = "Flag"),
         #Manual flag for a point the method misses due to a boundary effect with binning
         FlaggedOutliers = ifelse(Date == mdy("01/24/2022"),
                                  TRUE, FlaggedOutliers),
         NoOutlierVar = ifelse(FlaggedOutliers, NA, !!sym(SelectedIndVar)))
#Split N1 into outlier and non-outlier columns for the next ggplot
OutLierPlotDF <- ErrorMarkedDF%>%
  mutate(!!OutlierName := ifelse(FlaggedOutliers, !!sym(SelectedIndVar), NA))%>%
  mutate(!!SelectedIndVar := NoOutlierVar)
OutLierPlotObject <- OutLierPlotDF%>%
filter(!(is.na(!!sym(SelectedIndVar))&is.na(!!sym(OutlierName))))%>%
ggplot(aes(x=Date))+#Data depends on time
geom_line(aes(y=!!sym(SelectedIndVar),
color=SelectedIndVar,
info = !!sym(SelectedIndVar)))+#compares Var to Cases
geom_point(aes(y=!!sym(OutlierName),
color=OutlierName,
info = !!sym(OutlierName)))+
ColorRule
#A hand-picked list of outliers was also considered among other choices
ggplotly(OutLierPlotObject,tooltip=c("info","Date"))%>%
layout(yaxis = SecondAxisFormat,
legend=list(title=list(text=''),x = 1.15, y = 0.9),
margin = list(l = 50, r = 75, b = 50, t = 50, pad = 4))
Figure 2.1: Wastewater concentration for Madison with potential outliers marked. Using a rolling symmetric 21-day bin as the sample, we reject points more than 2.5 standard deviations from the bin mean as extreme. This process is run multiple times to make the outlier selection robust.
#Drop the raw column and put the outlier-free column in its place
UpdatedDF <- ErrorMarkedDF%>%
  select(-!!sym(SelectedIndVar))%>%
  rename(!!sym(SelectedIndVar) := NoOutlierVar)
To isolate this relationship we used a primitive binning approach: non-overlapping two-week bins, taking the median of the data within each bin. This reduces autocorrelation issues and masks potential noise in the data. We see a very strong trend, slightly improved with the outliers removed.
#StartDate is where the binning starts
#DaySmoothing is the size of the bins
#Lag is the offset between Cases and Var
BinnedVarName <- paste("Binned",SelectedIndVar)
Binning <- function(DF,StartDate=1,DaySmoothing=14,Lag=0){
  BinDF <- DF%>%
    select(Date, Cases, !!sym(SelectedIndVar))%>%
    mutate(MovedCases = data.table::shift(Cases, Lag),#shift Cases back Lag days
           Week=(as.numeric(Date)+StartDate)%/%DaySmoothing)%>%#assign bins via integer division
    group_by(Week)%>%
    summarise("Binned Cases" := median(MovedCases, na.rm=TRUE),
              !!BinnedVarName := median(!!sym(SelectedIndVar), na.rm=TRUE),
              Date = median(Date,na.rm = TRUE))#summarize data within bins
  return(BinDF)
}
BinErrorRemovedDF <- Binning(UpdatedDF)
BinErrorKeptDF <- Binning(FullDF)
DifferenceDF <- inner_join(BinErrorRemovedDF,BinErrorKeptDF,by=c("Week","Date"))%>%
  filter(!!sym(paste0(BinnedVarName,".x")) != !!sym(paste0(BinnedVarName,".y")))#keep only bins whose medians changed
BinedCorrGraph <- ggplot()+
  geom_segment(aes(x = !!sym("Binned Cases.x"),
                   y = !!sym(paste0(BinnedVarName,".x")),
                   xend = !!sym("Binned Cases.y"),
                   yend = !!sym(paste0(BinnedVarName,".y"))),
               data = DifferenceDF)+
geom_point(aes(x = !!sym("Binned Cases"),
y = !!sym(BinnedVarName),
color = "outliers not removed",
info = Date),
size = 2,
data = BinErrorKeptDF,
shape=15)+
geom_point(aes(x = !!sym("Binned Cases"),
y = !!sym(BinnedVarName),
color = "outliers removed",
info = Date),
data = BinErrorRemovedDF)+
ggtitle(paste0(BinnedVarName,", Cases removed potential outliers"))+
#geom_abline(slope = 3000)+
labs(x="Binned Cases",y=BinnedVarName)
ggplotly(BinedCorrGraph,tooltip=c("x","y","info"))%>%
layout(legend=list(title=list(text=''),x = 1.15, y = 0.9),
margin = list(l = 50, r = 75, b = 50, t = 50, pad = 4))
Figure 3.1: Binned wastewater concentration and daily cases for Madison. Red squares are the bin medians without removing the flagged outliers; blue circles are the bin medians with the flagged outliers removed. A black line connects the red square and blue circle that represent the same bin.
#cor(BinDF$BinnedN1, BinDF$BinnedCases, use="pairwise.complete.obs")
#summary(lm(BinnedCases~BinnedN1, data=BinDF))
OutputBinning <- data.frame(row.names=c("correlation"),
WithOutliers = c(cor(BinErrorKeptDF[[BinnedVarName]],
BinErrorKeptDF[["Binned Cases"]],
use="pairwise.complete.obs")),
WithOutOutliers = c(cor(BinErrorRemovedDF[[BinnedVarName]],
BinErrorRemovedDF[["Binned Cases"]],
use="pairwise.complete.obs")))
formattable(OutputBinning)
| | WithOutliers | WithOutOutliers |
|---|---|---|
| correlation | 0.8124764 | 0.8232805 |
The goal of this section is to smooth the data to get a similar effect without losing resolution.
A key component is that the relationship between N1 and cases involves a shedding lag: a gamma distribution models both the time between catching Covid-19 and getting tested and the concentration of the shed particles. We found that a gamma distribution with a mean of 11.73 days and a standard deviation of 7.68 days gives good results and matches other research (Fernandez-Cassi et al. 2021).
Mean <- 11.73
StandardDeviation <- 7.68
Scale = StandardDeviation^2/Mean
Shape = Mean/Scale
SLDWidth <- 21
weights <- dgamma(1:SLDWidth, scale = Scale, shape = Shape)
par(mar=c(4,4,4,10))
plot(weights,
main=paste("Gamma Distribution with mean =",Mean, "days, and SD =",StandardDeviation),
ylab = "Weight",
xlab = "Lag")
Figure 4.1: Gamma distribution used as the shedding lag distribution (SLD).
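As a sanity check on the parameterization above: for a gamma distribution, mean = shape × scale and variance = shape × scale², so the conversion from (mean, SD) to (shape, scale) can be verified by recovering the original moments:

```r
# Illustrative check of the shape/scale conversion used above.
Mean <- 11.73
StandardDeviation <- 7.68
Scale <- StandardDeviation^2 / Mean   # variance / mean
Shape <- Mean / Scale                 # mean^2 / variance

# mean = shape*scale, sd = sqrt(shape)*scale
c(mean = Shape * Scale, sd = sqrt(Shape) * Scale)
```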
SLDSmoothedDF <- UpdatedDF%>%
  mutate(
    SLDCases = c(rep(NA,SLDWidth-1),#dropping the first SLDWidth-1 values is fine:
                 #we have a 50+ day buffer of case data
                 rollapply(Cases,width=SLDWidth,FUN=weighted.mean,
                           w=weights,
                           na.rm = TRUE)))
SLDPlot = SLDSmoothedDF%>%
#NoNa("SLDCases")%>%#same plot as earlier but with the SLD smoothing
ggplot(aes(x=Date))+
geom_line(aes(y=Cases,
color="Cases" , info = Cases),alpha=.2)+
geom_line(aes(y=SevenDayMACases,
color="Seven Day MA Cases" , info = SevenDayMACases),alpha=.4)+
geom_line(aes(y=SLDCases, color="SLD Cases",info = SLDCases))+
labs(y="Reported Cases")+
ColorRule
ggplotly(SLDPlot,tooltip=c("info","Date"))%>%
layout(legend=list(title=list(text=''),x = 1.15, y = 0.9),
margin = list(l = 50, r = 75, b = 50, t = 50, pad = 4))
Figure 4.2: Case data for Madison. SLD Cases is a weighted mean of cases using the gamma distribution as the weights.
Cross-correlation and Granger causality formalize this analysis: cross-correlation measures the correlation between the two series over a range of time shifts, and the Granger test checks for predictive power.
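On synthetic data the two tools behave as expected: a series that leads another should produce a cross-correlation peak at the corresponding lag, and Granger-predict it. A small illustrative sketch (the AR(1) series and 3-step lag here are made up for demonstration):

```r
library(lmtest)   # for grangertest()

set.seed(1)
# x leads y by 3 steps, plus a little observation noise
x <- as.numeric(arima.sim(list(ar = 0.7), n = 300))
y <- c(rep(NA, 3), head(x, -3)) + rnorm(300, sd = 0.3)

cc <- ccf(x, y, na.action = na.pass, plot = FALSE)
BestLag <- cc$lag[which.max(cc$acf)]   # expect a peak near -3 (x leads y)

ok   <- complete.cases(x, y)
p_xy <- grangertest(x[ok], y[ok], order = 3)$"Pr(>F)"[2]  # x should help predict y
```

Note the sign convention: `ccf(x, y)` estimates cor(x[t+k], y[t]), so a leading x shows up as a peak at a negative lag. Reading the lag off `cc$lag` avoids hard-coding an offset that depends on the default `lag.max`.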
CCFChar <- function(ccfObject){
  LargestC = max(ccfObject$acf)
  Lag = ccfObject$lag[which.max(ccfObject$acf)]#read the lag off the ccf object instead of hard-coding the offset
  return(c(LargestC,Lag))
}
ModelTesting <- function(DF,Var1,Var2){
UsedDF <- DF%>%
NoNa(Var1,Var2)#removing rows from before both series started
Vec1 <- unname(unlist(UsedDF[Var1]))
Vec2 <- unname(unlist(UsedDF[Var2]))
CCFReport <- CCFChar(ccf(Vec1,Vec2,na.action=na.pass,plot = FALSE))
VarPredCase <- grangertest(Vec1, Vec2, order = 1)$"Pr(>F)"[2]
CasePredVar <- grangertest(Vec2,Vec1, order = 1)$"Pr(>F)"[2]
return(round(c(CCFReport,CasePredVar,VarPredCase),4))
}
#ErrorRemovedDF
BaseLine <- ModelTesting(FullDF,SelectedIndVar,"Cases")
BaseLineSevenDay <- ModelTesting(FullDF,SelectedIndVar,"SevenDayMACases")
ErrorRemoved <- ModelTesting(UpdatedDF,SelectedIndVar,"SevenDayMACases")
SLDVar <- ModelTesting(SLDSmoothedDF,SelectedIndVar,"SLDCases")
SevenLoess <- ModelTesting(SLDSmoothedDF,loessVar,"SevenDayMACases")
SLDLoess <- ModelTesting(SLDSmoothedDF,loessVar,"SLDCases")
Output <- data.frame(row.names=c("Max Cross Correlation","Lag of largest Cross correlation","P-value WasteWater predicts Cases","P-value Cases predicts wastewater"),
CasesvsVar = BaseLine,
SevenDayMACasesvsVar = BaseLineSevenDay,
ErrorRemoved = ErrorRemoved,
SLDVar = SLDVar,
SevenLoess = SevenLoess,
SLDLoess = SLDLoess)
OutputRightPosition <- transpose(Output)
colnames(OutputRightPosition) <- rownames(Output)
rownames(OutputRightPosition) <- c(paste("Section 1: Cases vs", SelectedIndVar),
                                   paste("Section 1: 7 Day MA Cases vs", SelectedIndVar),
                                   paste("Section 2: Cases vs", SelectedIndVar),
                                   paste("Section 4.2: SLD Cases vs", SelectedIndVar),
                                   paste("Section 4.3: 7 Day MA Cases vs Loess smoothing of", SelectedIndVar),
                                   paste("Section 4.3: SLD Cases vs Loess smoothing of", SelectedIndVar))
formattable(OutputRightPosition)
| | Max Cross Correlation | Lag of largest Cross correlation | P-value WasteWater predicts Cases | P-value Cases predicts wastewater |
|---|---|---|---|---|
| Section 1: Cases vs N1 | 0.4370 | 6 | 0e+00 | 0.0000 |
| Section 1: 7 Day MA Cases vs N1 | 0.4207 | 3 | 0e+00 | 0.5489 |
| Section 2: Cases vs N1 | 0.7863 | 0 | 0e+00 | 0.0001 |
| Section 4.2: SLD Cases vs N1 | 0.7533 | -6 | 3e-04 | 0.0000 |
| Section 4.3: 7 Day MA Cases vs Loess smoothing of N1 | 0.8653 | 0 | 0e+00 | 0.0000 |
| Section 4.3: SLD Cases vs Loess smoothing of N1 | 0.8165 | -5 | 0e+00 | 0.0000 |